Large language models (LLMs) have revolutionized machine learning due to their ability to capture complex interactions between input features. Popular post-hoc explanation methods like SHAP provide marginal feature attributions, while their extensions to interaction importances only scale to small input lengths (≈20). We propose Spectral Explainer (SPEX), a model-agnostic interaction attribution algorithm that efficiently scales to large input lengths (≈1000). SPEX exploits underlying natural sparsity among interactions (common in real-world data) and applies a sparse Fourier transform using a channel decoding algorithm to efficiently identify important interactions. We perform experiments across three difficult long-context datasets that require LLMs to utilize interactions between inputs to complete the task. For large inputs, SPEX outperforms marginal attribution methods by up to 20% in terms of faithfully reconstructing LLM outputs. Further, SPEX successfully identifies key features and interactions that strongly influence model output. For one of our datasets, HotpotQA, SPEX provides interactions that align with human annotations. Finally, we use our model-agnostic approach to generate explanations to demonstrate abstract reasoning in closed-source LLMs (GPT-4o mini) and compositional reasoning in vision-language models.
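To make the spectral view concrete, the sketch below treats a model as a black-box function over 0/1 masks of its inputs and recovers a sparse set of Boolean-Fourier (parity) coefficients, which correspond to feature interactions. This is a minimal illustration under stated assumptions, not SPEX itself: the paper's channel-decoding sparse Fourier transform is replaced by a plain LASSO fit over low-degree parity features, and the value function, sample count, and toy model are all hypothetical.

```python
# Minimal sketch of sparse interaction recovery over the Boolean cube.
# NOT the SPEX algorithm: LASSO over low-degree parity features stands in
# for the paper's channel-decoding sparse Fourier transform.
import itertools
import numpy as np
from sklearn.linear_model import Lasso

def parity_features(masks, max_degree=2):
    """Map 0/1 masks to +/-1 parity (Fourier basis) columns up to max_degree."""
    signs = 1 - 2 * masks                     # 0/1 -> +1/-1
    n = masks.shape[1]
    subsets = [s for d in range(max_degree + 1)
               for s in itertools.combinations(range(n), d)]
    cols = [np.prod(signs[:, list(s)], axis=1) if s else np.ones(len(masks))
            for s in subsets]
    return np.column_stack(cols), subsets

def sparse_interactions(value_fn, n, num_samples=2000, max_degree=2, alpha=0.01):
    """value_fn maps a 0/1 mask (which inputs are kept) to a model score."""
    rng = np.random.default_rng(0)
    masks = rng.integers(0, 2, size=(num_samples, n))
    y = np.array([value_fn(m) for m in masks])
    X, subsets = parity_features(masks, max_degree)
    coef = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_
    return {s: round(c, 3) for s, c in zip(subsets, coef) if abs(c) > 1e-2}

# Toy stand-in for an LLM score: a main effect plus one pairwise interaction.
f = lambda m: 0.5 - 1.0 * m[0] + 2.0 * m[1] * m[3]
print(sparse_interactions(f, n=6))  # sparse mass on {0}, {1}, {3}, and {1, 3}
```

Because the toy model contains a single pairwise term, the fit concentrates on a handful of subsets and leaves the other parity coefficients near zero, which is the sparsity structure SPEX exploits at much larger input lengths.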
Modern data aggregation often involves a platform collecting data from a network of users with various privacy options. Platforms must solve the problem of how to allocate incentives to users to convince them to share their data. This paper puts forth an idea for a fair amount to compensate users for their data at a given privacy level, based on an axiomatic definition of fairness along the lines of the celebrated Shapley value. To the best of our knowledge, these are the first fairness concepts for data that explicitly consider privacy constraints. We also formulate a heterogeneous federated learning problem for the platform with privacy level options for users. By studying this problem, we investigate the amount of compensation users receive under fair allocations with different privacy levels, amounts of data, and degrees of heterogeneity. We also discuss what happens when the platform is forced to design fair incentives. Under certain conditions, we find that when privacy sensitivity is low, the platform will set incentives to ensure that it collects all the data with the lowest privacy options. When privacy sensitivity is above a given threshold, the platform will provide no incentives to users. Between these two extremes, the platform will set incentives so that some fraction of the users choose the higher privacy option and the rest choose the lower privacy option.
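As a point of reference for the incentive design, the sketch below computes exact Shapley payments by averaging each user's marginal contribution over all coalitions, the classical construction the paper's privacy-aware fairness axioms build on. The concave, privacy-discounted utility function and all numbers are hypothetical stand-ins, not the paper's federated learning objective.

```python
# Exact Shapley payments from marginal contributions over all coalitions.
# The utility below (diminishing returns on privacy-discounted data) is a
# hypothetical stand-in, not the paper's federated learning objective.
import itertools
import math

def shapley_payments(n, utility):
    """Shapley value of each user i under coalition value function `utility`."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
            for S in itertools.combinations(others, k):
                phi[i] += w * (utility(S + (i,)) - utility(S))
    return phi

# Hypothetical instance: three users; privacy factor in (0, 1], where 1.0 is
# the lowest-privacy option and discounts how useful each user's data is.
data    = (4.0, 1.0, 2.0)
privacy = (1.0, 0.5, 0.8)
value   = lambda S: math.sqrt(sum(data[j] * privacy[j] for j in S))
print(shapley_payments(len(data), value))  # per-user payments; they sum to value((0, 1, 2))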